autonomous weapon system
Military AI Needs Technically-Informed Regulation to Safeguard AI Research and its Applications
Simmons-Edler, Riley, Dong, Jean, Lushenko, Paul, Rajan, Kanaka, Badman, Ryan P.
Military weapon systems and command-and-control infrastructure augmented by artificial intelligence (AI) have seen rapid development and deployment in recent years. However, the sociotechnical impacts of AI on combat systems, military decision-making, and the norms of warfare have been understudied. We focus on a specific subset of lethal autonomous weapon systems (LAWS) that use AI for targeting or battlefield decisions. We refer to this subset as AI-powered lethal autonomous weapon systems (AI-LAWS) and argue that they introduce novel risks -- including unanticipated escalation, poor reliability in unfamiliar environments, and erosion of human oversight -- all of which threaten both military effectiveness and the openness of AI research. These risks cannot be addressed by high-level policy alone; effective regulation must be grounded in the technical behavior of AI models. We argue that AI researchers must be involved throughout the regulatory lifecycle. Thus, we propose a clear, behavior-based definition of AI-LAWS -- systems that introduce unique risks through their use of modern AI -- as a foundation for technically grounded regulation, given that existing frameworks do not distinguish them from conventional LAWS. Using this definition, we propose several technically-informed policy directions and invite greater participation from the AI research community in military AI policy discussions.
- Research Report (0.64)
- Overview (0.46)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military > Army (1.00)
UN revisits 'killer robot' regulations as concerns about AI-controlled weapons grow
The CyberGuy Kurt Knutsson joins 'Fox & Friends' to discuss the U.S.-Saudi investment summit and the debate over regulation as artificial intelligence continues to advance. Several nations met at the United Nations (U.N.) on Monday to revisit a topic the international body has been discussing for over a decade: the lack of regulations on lethal autonomous weapons systems (LAWS), often referred to as "killer robots." This latest round of talks comes as wars rage in Ukraine and Gaza. While the meeting was held behind closed doors, U.N. Secretary-General António Guterres released a statement doubling down on his 2026 deadline for a legally binding solution to the threats posed by LAWS. "Machines that have the power and discretion to take human lives without human control are politically unacceptable, morally repugnant and should be banned by international law," Guterres said.
- Europe > Ukraine (0.26)
- Asia > Middle East > Palestine > Gaza Strip > Gaza Governorate > Gaza (0.26)
- North America > United States > New York (0.06)
- (2 more...)
- Government (1.00)
- Law > International Law (0.70)
SciFi-Benchmark: How Would AI-Powered Robots Behave in Science Fiction Literature?
Sermanet, Pierre, Majumdar, Anirudha, Sindhwani, Vikas
Given the recent rate of progress in artificial intelligence (AI) and robotics, a tantalizing question is emerging: would robots controlled by emerging AI systems be strongly aligned with human values? In this work, we propose a scalable way to probe this question by generating a benchmark spanning the key moments in 824 major pieces of science fiction literature (movies, TV, novels, and scientific books) where an agent (AI or robot) made critical decisions (good or bad). We use an LLM's recollection of each key moment to generate questions about similar situations, the decisions made by the agent, and alternative decisions it could have made (good or bad). We then measure an approximation of how well models align with human values on a set of human-voted answers. We also generate rules that can be automatically improved via an amendment process to produce the first Sci-Fi-inspired constitutions for promoting ethical behavior in AIs and robots in the real world. Our first finding is that modern LLMs paired with constitutions turn out to be well-aligned with human values (95.8%), contrary to the unsettling decisions typically made in SciFi (only 21.2% alignment). Secondly, we find that generated constitutions substantially increase alignment compared to the base model (79.4% to 95.8%) and show resilience to an adversarial prompt setting (23.3% to 92.3%). Additionally, we find that these constitutions are among the top performers on the ASIMOV Benchmark, which is derived from real-world images and hospital injury reports. Sci-Fi-inspired constitutions are thus highly aligned and applicable in real-world situations. We release SciFi-Benchmark, a large-scale dataset to advance robot ethics and safety research. It comprises 9,056 questions and 53,384 answers, in addition to a smaller human-labeled evaluation set. Data is available at https://scifi-benchmark.github.io
- North America > United States (0.92)
- Asia (0.14)
- Europe > United Kingdom (0.13)
- Research Report > Promising Solution (0.45)
- Research Report > New Finding (0.45)
- Media > Film (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- (9 more...)
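The alignment measurement described in the SciFi-Benchmark abstract above reduces to comparing a model's chosen answers against human-voted reference answers. A minimal illustrative sketch of that scoring step, not the authors' actual evaluation code (the function names and toy data are hypothetical):

```python
from collections import Counter

def majority_answer(votes):
    """Return the answer option that received the most human votes."""
    return Counter(votes).most_common(1)[0][0]

def alignment_score(model_answers, human_votes):
    """Fraction of questions where the model's choice matches the
    human-majority answer (an approximation of value alignment)."""
    matches = sum(
        1 for qid, answer in model_answers.items()
        if answer == majority_answer(human_votes[qid])
    )
    return matches / len(model_answers)

# Hypothetical toy data: two questions, each with human votes over options A/B.
human_votes = {"q1": ["A", "A", "B"], "q2": ["B", "B", "B"]}
model_answers = {"q1": "A", "q2": "A"}
print(alignment_score(model_answers, human_votes))  # 0.5
```

Under this reading, the paper's headline figures (e.g. 95.8% alignment) would correspond to this fraction computed over the full human-voted evaluation set.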
Technical Risks of (Lethal) Autonomous Weapons Systems
The autonomy and adaptability of (Lethal) Autonomous Weapons Systems, (L)AWS for short, promise unprecedented operational capabilities, but they also introduce profound risks that challenge the principles of control, accountability, and stability in international security. This report outlines the key technological risks associated with (L)AWS deployment, emphasizing their unpredictability, lack of transparency, and operational unreliability, which can lead to severe unintended consequences. Key takeaways:
1. Proposed advantages of (L)AWS can only be achieved through objectification and classification, but a range of systematic risks limits the reliability and predictability of classifying algorithms.
2. These systematic risks include the black-box nature of AI decision-making, susceptibility to reward hacking, goal misgeneralization, and the potential for emergent behaviors that escape human control.
3. (L)AWS could act in ways that are not just unexpected but also uncontrollable, undermining mission objectives and potentially escalating conflicts.
4. Even rigorously tested systems may behave unpredictably and harmfully in real-world conditions, jeopardizing both strategic stability and humanitarian principles.
- Government > Military (1.00)
- Energy > Oil & Gas > Upstream (0.40)
Balancing Power and Ethics: A Framework for Addressing Human Rights Concerns in Military AI
Islam, Mst Rafia, Wasi, Azmine Toushik
AI has made significant strides recently, leading to various applications in both civilian and military sectors. The military sees AI as a solution for developing more effective and faster technologies. While AI offers benefits like improved operational efficiency and precision targeting, it also raises serious ethical and legal concerns, particularly regarding human rights violations. Autonomous weapons that make decisions without human input can threaten the right to life and violate international humanitarian law. To address these issues, we propose a three-stage framework (Design, In Deployment, and During/After Use) for evaluating human rights concerns in the design, deployment, and use of military AI. Each phase includes multiple components that address concerns specific to that phase, ranging from bias and regulatory issues to violations of international humanitarian law. Through this framework, we aim to balance the advantages of AI in military operations with the need to protect human rights.
- Asia > Bangladesh (0.05)
- North America > United States > Virginia (0.05)
- North America > United States > New York (0.05)
- North America > United States > District of Columbia > Washington (0.05)
- Law > Civil Rights & Constitutional Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)
Spotlight Session on Autonomous Weapons Systems at ICRC 34th International Conference
Autonomous weapons systems (AWS) change the way humans make decisions, the effects of those decisions, and who is accountable for the decisions made. We must remain vigilant, informed, and human-centred as we deliberate on developing norms for their development, use, and justification. Ways to enhance compliance with international humanitarian law (IHL) include: training weapons decision-makers in IHL; developing best practice in weapons reviews, including requirements for industry to ensure that any new weapon, means, or method of warfare is capable of being used lawfully; developing human-centred test and evaluation methods; investing in digital infrastructure to increase knowledge of the civilian environment in a conflict and its dynamics; investing in research on the real effects and consequences of civilian harm for the achievement of military and political objectives; improving secure communications between stakeholders in a conflict; and upskilling governments and NGOs in what is technically achievable with emerging technologies so that they can contribute to system requirements, test and evaluation protocols, and operational rules of use and engagement. Governments are responsible for setting requirements for weapons systems, and for driving ethicality as well as lethality. Governments can require systems to be made and used to better protect civilians and protected objects. The UN can advocate for compliance with IHL, human rights, and the human-centred use of weapons systems, and for improved mechanisms to monitor and trace military decision-making, including decisions affected by autonomous functionality.
- Europe > Ukraine (0.04)
- Asia > Myanmar (0.04)
- Asia > Middle East > Yemen (0.04)
- (2 more...)
Trust or Bust: Ensuring Trustworthiness in Autonomous Weapon Systems
Cools, Kasper, Maathuis, Clara
The integration of Autonomous Weapon Systems (AWS) into military operations presents both significant opportunities and challenges. This paper explores the multifaceted nature of trust in AWS, emphasising the necessity of establishing reliable and transparent systems to mitigate risks associated with bias, operational failures, and accountability. Despite advancements in Artificial Intelligence (AI), the trustworthiness of these systems, especially in high-stakes military applications, remains a critical issue. Through a systematic review of existing literature, this research identifies gaps in the understanding of trust dynamics during the development and deployment phases of AWS. It advocates for a collaborative approach that includes technologists, ethicists, and military strategists to address these ongoing challenges. The findings underscore the importance of Human-Machine teaming and enhancing system intelligibility to ensure accountability and adherence to International Humanitarian Law. Ultimately, this paper aims to contribute to the ongoing discourse on the ethical implications of AWS and the imperative for trustworthy AI in defense contexts.
- North America > United States > District of Columbia > Washington (0.05)
- Europe > Netherlands (0.04)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.95)
Neuro-Symbolic AI for Military Applications
Hagos, Desta Haileselassie, Rawat, Danda B.
Artificial Intelligence (AI) plays a significant role in enhancing the capabilities of defense systems, revolutionizing strategic decision-making, and shaping the future landscape of military operations. Neuro-Symbolic AI is an emerging approach that leverages and augments the strengths of neural networks and symbolic reasoning. These systems have the potential to be more impactful and flexible than traditional AI systems, making them well-suited for military applications. This paper comprehensively explores the diverse dimensions and capabilities of Neuro-Symbolic AI, aiming to shed light on its potential applications in military contexts. We investigate its capacity to improve decision-making, automate complex intelligence analysis, and strengthen autonomous systems, and we further explore its potential to solve complex tasks in civilian domains beyond military contexts. Through this exploration, we address ethical, strategic, and technical considerations crucial to the development and deployment of Neuro-Symbolic AI in military and civilian applications, contributing to the growing body of research on the possibilities this approach offers.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > District of Columbia > Washington (0.04)
- North America > United States > Pennsylvania (0.04)
- (5 more...)
- Overview (1.00)
- Research Report > Promising Solution (0.92)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- (2 more...)
AI's 'Oppenheimer moment': autonomous weapons enter the battlefield
A squad of soldiers is under attack and pinned down by rockets in the close quarters of urban combat. One of them makes a call over his radio, and within moments a fleet of small autonomous drones equipped with explosives fly through the town square, entering buildings and scanning for enemies before detonating on command. One by one the suicide drones seek out and kill their targets. A voiceover on the video, a fictional ad for multibillion-dollar Israeli weapons company Elbit Systems, touts the AI-enabled drones' ability to "maximize lethality and combat tempo". While defense companies like Elbit promote their new advancements in artificial intelligence (AI) with sleek dramatizations, the technology they are developing is increasingly entering the real world.
- Asia > Middle East > Israel (0.15)
- Europe > Ukraine (0.05)
- Europe > Russia (0.05)
- (15 more...)
- Aerospace & Defense (1.00)
- Government > Regional Government > North America Government > United States Government (0.94)
- Government > Military > Army (0.85)
War Elephants: Rethinking Combat AI and Human Oversight
Feldman, Philip, Dant, Aaron, Dreany, Harry
This paper explores the effects that pervasive AI is having on the nature of combat. We look beyond the substitution of AI for experts to an approach in which complementary human and machine abilities are blended. Using historical and modern examples, we show how autonomous weapons systems can be effectively managed by teams of human "AI Operators" combined with AI/ML "Proxy Operators." By basing our approach on the principles of complementation, we provide a flexible and dynamic way of managing lethal autonomous systems. We conclude by presenting a path toward an integrated vision of machine-speed combat in which battlefield AI is operated by AI Operators who watch for patterns of behavior within the battlefield to assess the performance of lethal autonomous systems. This approach enables the development of combat systems that are likely to be more ethical, operate at machine speed, and respond to a broader range of dynamic battlefield conditions than any purely autonomous AI system could support.
- Europe > Ukraine (0.04)
- North America > United States > Virginia (0.04)
- North America > United States > Maryland > Prince George's County > Beltsville (0.04)
- (7 more...)
- Government > Military > Army (0.77)
- Government > Regional Government > North America Government > United States Government (0.68)